
    On Nondeterministic Derandomization of Freivalds' Algorithm: Consequences, Avenues and Algorithmic Progress

    Motivated by studying the power of randomness, certifying algorithms and barriers for fine-grained reductions, we investigate the question whether the multiplication of two $n \times n$ matrices can be performed in near-optimal nondeterministic time $\tilde{O}(n^2)$. Since a classic algorithm due to Freivalds verifies correctness of matrix products probabilistically in time $O(n^2)$, our question is a relaxation of the open problem of derandomizing Freivalds' algorithm. We discuss consequences of a positive or negative resolution of this problem and provide potential avenues towards resolving it. Particularly, we show that sufficiently fast deterministic verifiers for 3SUM or univariate polynomial identity testing yield faster deterministic verifiers for matrix multiplication. Furthermore, we present the partial algorithmic progress that distinguishing whether an integer matrix product is correct or contains between $1$ and $n$ erroneous entries can be performed in time $\tilde{O}(n^2)$ -- interestingly, the difficult case of deterministic matrix product verification is not a problem of "finding a needle in the haystack", but rather of cancellation effects in the presence of many errors. Our main technical contribution is a deterministic algorithm that corrects an integer matrix product containing at most $t$ errors in time $\tilde{O}(\sqrt{t}\,n^2 + t^2)$. To obtain this result, we show how to compute an integer matrix product with at most $t$ nonzeroes in the same running time. This improves upon known deterministic output-sensitive integer matrix multiplication algorithms for $t = \Omega(n^{2/3})$ nonzeroes, which is of independent interest.
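
    For orientation, the randomized verifier of Freivalds referenced above can be sketched in a few lines: pick a random vector $r$ and compare $A(Br)$ with $Cr$, which costs $O(n^2)$ arithmetic operations and catches an incorrect product with probability at least $1/2$ per trial. The sketch below is a minimal illustration of that classical verifier only, not the paper's deterministic machinery; the number of trials is an arbitrary choice.

        import random

        def freivalds_verify(A, B, C, trials=20):
            """Probabilistically check whether A @ B == C for n x n integer matrices.

            Each trial picks a random 0/1 vector r and compares A(Br) with Cr in
            O(n^2) time; an incorrect product is caught with probability >= 1/2
            per trial, so the overall error probability is at most 2**-trials.
            """
            n = len(A)
            for _ in range(trials):
                r = [random.randint(0, 1) for _ in range(n)]
                Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
                ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
                Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
                if ABr != Cr:
                    return False  # the vector r certifies that C is wrong
            return True  # C is (probably) the correct product

        # Example: a correct and an incorrect 2x2 product.
        A = [[1, 2], [3, 4]]
        B = [[5, 6], [7, 8]]
        C_good = [[19, 22], [43, 50]]
        C_bad = [[19, 22], [43, 51]]
        print(freivalds_verify(A, B, C_good))  # True
        print(freivalds_verify(A, B, C_bad))   # False (with high probability)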

    Polygon Placement Revisited: (Degree of Freedom + 1)-SUM Hardness and an Improvement via Offline Dynamic Rectangle Union

    We revisit the classical problem of determining the largest copy of a simple polygon $P$ that can be placed into a simple polygon $Q$. Despite significant effort, known algorithms require high polynomial running times. Barequet and Har-Peled (2001) give a lower bound of $n^{2-o(1)}$ under the 3SUM conjecture when $P$ and $Q$ are (convex) polygons with $\Theta(n)$ vertices each. This leaves open whether we can establish (1) hardness beyond quadratic time and (2) any superlinear bound for constant-sized $P$ or $Q$. In this paper, we affirmatively answer these questions under the $k$SUM conjecture, proving natural hardness results that increase with each degree of freedom (scaling, $x$-translation, $y$-translation, rotation): (1) Finding the largest copy of $P$ that can be $x$-translated into $Q$ requires time $n^{2-o(1)}$ under the 3SUM conjecture. (2) Finding the largest copy of $P$ that can be arbitrarily translated into $Q$ requires time $n^{2-o(1)}$ under the 4SUM conjecture. (3) The above lower bounds are almost tight when one of the polygons is of constant size: we obtain an $\tilde{O}((pq)^{2.5})$-time algorithm for orthogonal polygons $P, Q$ with $p$ and $q$ vertices, respectively. (4) Finding the largest copy of $P$ that can be arbitrarily rotated and translated into $Q$ requires time $n^{3-o(1)}$ under the 5SUM conjecture. We are not aware of any other such natural (degree of freedom $+1$)-SUM hardness for a geometric optimization problem.
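
    The lower bounds above are conditional on the $k$SUM conjectures; for $k = 3$ the conjecture asserts that deciding whether three input numbers sum to zero needs essentially quadratic time. As context only, here is a minimal sketch of the standard $O(n^2)$ 3SUM baseline (sort plus two pointers); it is not part of the paper's reductions.

        def has_3sum(nums):
            """Standard O(n^2) baseline for 3SUM: decide whether some triple sums to 0.

            Sort the input, then for each fixed element scan the remaining suffix
            with two pointers. The 3SUM conjecture asserts that no algorithm beats
            this baseline by a polynomial factor (i.e., n^{2 - eps} is impossible).
            """
            a = sorted(nums)
            n = len(a)
            for i in range(n - 2):
                lo, hi = i + 1, n - 1
                while lo < hi:
                    s = a[i] + a[lo] + a[hi]
                    if s == 0:
                        return True
                    if s < 0:
                        lo += 1
                    else:
                        hi -= 1
            return False

        print(has_3sum([-5, 1, 4, 2, 9]))  # True: -5 + 1 + 4 = 0
        print(has_3sum([1, 2, 3, 7]))      # False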

    Finding Small Satisfying Assignments Faster Than Brute Force: A Fine-grained Perspective into Boolean Constraint Satisfaction

    To study the question under which circumstances small solutions can be found faster than by exhaustive search (and by how much), we study the fine-grained complexity of Boolean constraint satisfaction with size constraint exactly $k$. More precisely, we aim to determine, for any finite constraint family, the optimal running time $f(k)\,n^{g(k)}$ required to find satisfying assignments that set precisely $k$ of the $n$ variables to $1$. Under central hardness assumptions on detecting cliques in graphs and 3-uniform hypergraphs, we give an almost tight characterization of $g(k)$ into four regimes: (1) Brute force is essentially best-possible, i.e., $g(k) = (1 \pm o(1))k$, (2) the best algorithms are as fast as current $k$-clique algorithms, i.e., $g(k) = (\omega/3 \pm o(1))k$, (3) the exponent has sublinear dependence on $k$ with $g(k) \in [\Omega(\sqrt[3]{k}), O(\sqrt{k})]$, or (4) the problem is fixed-parameter tractable, i.e., $g(k) = O(1)$. This yields a more fine-grained perspective than a previous FPT/W[1]-hardness dichotomy (Marx, Computational Complexity 2005). Our most interesting technical contribution is an $f(k)\,n^{4\sqrt{k}}$-time algorithm for SubsetSum with precedence constraints, parameterized by the target $k$ -- in particular, the approach, based on generalizing a bound on the Frobenius coin problem to a setting with precedence constraints, might be of independent interest.
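
    For intuition on the brute-force regime $g(k) = (1 \pm o(1))k$: exhaustive search tries all $\binom{n}{k} = n^{\Theta(k)}$ ways of choosing which $k$ variables to set to $1$ and checks every constraint. The sketch below is only a generic illustration of that baseline under the assumption that constraints are given as arbitrary Boolean predicates over variable indices; it is not the paper's algorithm or constraint encoding.

        from itertools import combinations

        def exists_weight_k_assignment(n, k, constraints):
            """Brute force over all assignments that set exactly k of n variables to 1.

            `constraints` is a (hypothetical) list of (scope, predicate) pairs, where
            `scope` is a tuple of variable indices and `predicate` maps the
            corresponding 0/1 values to True/False.  Running time is
            O(binom(n, k) * total constraint size), i.e. n^{(1 + o(1))k} --
            the baseline the paper compares against.
            """
            for ones in combinations(range(n), k):
                assignment = [0] * n
                for i in ones:
                    assignment[i] = 1
                if all(pred(*(assignment[i] for i in scope)) for scope, pred in constraints):
                    return True
            return False

        # Example: constraints x0 OR x1 and NOT(x2), with exactly 2 of 4 variables set to 1.
        constraints = [((0, 1), lambda a, b: a or b), ((2,), lambda c: not c)]
        print(exists_weight_k_assignment(4, 2, constraints))  # True, e.g. x0 = x1 = 1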

    Multivariate Fine-Grained Complexity of Longest Common Subsequence

    We revisit the classic combinatorial pattern matching problem of finding a longest common subsequence (LCS). For strings $x$ and $y$ of length $n$, a textbook algorithm solves LCS in time $O(n^2)$, but although much effort has been spent, no $O(n^{2-\varepsilon})$-time algorithm is known. Recent work indeed shows that such an algorithm would refute the Strong Exponential Time Hypothesis (SETH) [Abboud, Backurs, Vassilevska Williams FOCS'15; Bringmann, Künnemann FOCS'15]. Despite the quadratic-time barrier, for over 40 years an enduring scientific interest continued to produce fast algorithms for LCS and its variations. Particular attention was put into identifying and exploiting input parameters that yield strongly subquadratic time algorithms for special cases of interest, e.g., differential file comparison. This line of research was successfully pursued until 1990, at which time significant improvements came to a halt. In this paper, using the lens of fine-grained complexity, our goal is to (1) justify the lack of further improvements and (2) determine whether some special cases of LCS admit faster algorithms than currently known. To this end, we provide a systematic study of the multivariate complexity of LCS, taking into account all parameters previously discussed in the literature: the input size $n := \max\{|x|, |y|\}$, the length of the shorter string $m := \min\{|x|, |y|\}$, the length $L$ of an LCS of $x$ and $y$, the numbers of deletions $\delta := m - L$ and $\Delta := n - L$, the alphabet size, as well as the numbers of matching pairs $M$ and dominant pairs $d$. For any class of instances defined by fixing each parameter individually to a polynomial in terms of the input size, we prove a SETH-based lower bound matching one of three known algorithms. Specifically, we determine the optimal running time for LCS under SETH as $(n + \min\{d, \delta\Delta, \delta m\})^{1 \pm o(1)}$. [...]
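
    For reference, the textbook $O(n^2)$ algorithm mentioned above is the standard dynamic program over prefixes. The sketch below is only this classical baseline (in a common space-saving form), not any of the parameter-sensitive algorithms whose optimality the paper characterizes.

        def lcs_length(x, y):
            """Textbook O(|x|*|y|) dynamic program for the length of an LCS.

            dp[j] holds the LCS length of the current prefix of x and y[:j];
            rows are processed one character of x at a time, so memory is
            O(|y|) (pass the shorter string as y to get O(min(|x|, |y|))).
            """
            dp = [0] * (len(y) + 1)
            for xi in x:
                prev_diag = 0  # dp value of the previous row, previous column
                for j, yj in enumerate(y, start=1):
                    prev_row = dp[j]
                    dp[j] = prev_diag + 1 if xi == yj else max(dp[j], dp[j - 1])
                    prev_diag = prev_row
            return dp[len(y)]

        print(lcs_length("ABCBDAB", "BDCABA"))  # 4, e.g. the common subsequence "BCAB"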

    Discrete Fréchet Distance under Translation: Conditional Hardness and an Improved Algorithm


    Fine-Grained Complexity of Analyzing Compressed Data: Quantifying Improvements over Decompress-And-Solve

    Can we analyze data without decompressing it? As our data keeps growing, understanding the time complexity of problems on compressed inputs, rather than in convenient uncompressed forms, becomes more and more relevant. Suppose we are given a compression of size $n$ of data that originally has size $N$, and we want to solve a problem with time complexity $T(\cdot)$. The naive strategy of "decompress-and-solve" gives time $T(N)$, whereas "the gold standard" is time $T(n)$: to analyze the compression as efficiently as if the original data was small. We restrict our attention to data in the form of a string (text, files, genomes, etc.) and study the most ubiquitous tasks. While the challenge might seem to depend heavily on the specific compression scheme, most methods of practical relevance (the Lempel-Ziv family, dictionary methods, and others) can be unified under the elegant notion of Grammar Compression. A vast literature, across many disciplines, has established this as an influential notion for algorithm design. We introduce a framework for proving (conditional) lower bounds in this field, allowing us to assess whether decompress-and-solve can be improved, and by how much. Our main results are: (1) The $O(nN\sqrt{\log(N/n)})$ bound for LCS and the $O(\min\{N \log N, nM\})$ bound for Pattern Matching with Wildcards are optimal up to $N^{o(1)}$ factors, under the Strong Exponential Time Hypothesis. (Here, $M$ denotes the uncompressed length of the compressed pattern.) (2) Decompress-and-solve is essentially optimal for Context-Free Grammar Parsing and RNA Folding, under the $k$-Clique conjecture. (3) We give an algorithm showing that decompress-and-solve is not optimal for Disjointness.
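
    To make the notion of grammar compression concrete: a grammar-compressed string (a straight-line program) is a context-free grammar in which every nonterminal has exactly one rule and which derives a single string; basic quantities such as the decompressed length $N$ can be read off in time linear in the grammar size $n$, without decompressing. The sketch below is an illustrative toy representation (rules as a plain dictionary in dependency order), not the paper's framework.

        def decompressed_lengths(rules):
            """Compute the expansion length of every symbol of a straight-line program.

            `rules` maps each nonterminal to a list of symbols (nonterminals or
            single-character terminals), listed so that a rule only uses
            nonterminals defined earlier.  This runs in O(n) for grammar size n,
            even though the expanded string may have length N >> n.
            """
            length = {}
            for symbol, rhs in rules.items():
                length[symbol] = sum(length.get(s, 1) for s in rhs)  # terminals count as 1
            return length

        def expand(rules, start):
            """Decompress-and-solve baseline: materialize the full string (size N)."""
            out = []
            def rec(s):
                if s in rules:
                    for t in rules[s]:
                        rec(t)
                else:
                    out.append(s)
            rec(start)
            return "".join(out)

        # Toy SLP deriving "abababab": grammar size O(log N) for string length N.
        rules = {"X1": ["a", "b"], "X2": ["X1", "X1"], "X3": ["X2", "X2"]}
        print(decompressed_lengths(rules)["X3"])  # 8
        print(expand(rules, "X3"))                # 'abababab'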

    Fine-Grained Completeness for Optimization in P

    We initiate the study of fine-grained completeness theorems for exact and approximate optimization in the polynomial-time regime. Inspired by the first completeness results for decision problems in P (Gao, Impagliazzo, Kolokolova, Williams, TALG 2019) as well as the classic class MaxSNP and MaxSNP-completeness for NP optimization problems (Papadimitriou, Yannakakis, JCSS 1991), we define polynomial-time analogues MaxSP and MinSP, which contain a number of natural optimization problems in P, including Maximum Inner Product, general forms of nearest neighbor search, and optimization variants of the $k$-XOR problem. Specifically, we define MaxSP as the class of problems definable as $\max_{x_1,\dots,x_k} \#\{ (y_1,\dots,y_\ell) : \phi(x_1,\dots,x_k, y_1,\dots,y_\ell) \}$, where $\phi$ is a quantifier-free first-order property over a given relational structure (with MinSP defined analogously). On $m$-sized structures, we can solve each such problem in time $O(m^{k+\ell-1})$. Our results are: (1) We determine (a sparse variant of) the Maximum/Minimum Inner Product problem as complete under *deterministic* fine-grained reductions: a strongly subquadratic algorithm for Maximum/Minimum Inner Product would beat the baseline running time of $O(m^{k+\ell-1})$ for *all* problems in MaxSP/MinSP by a polynomial factor. (2) This completeness transfers to approximation: Maximum/Minimum Inner Product is also complete in the sense that a strongly subquadratic $c$-approximation would give a $(c+\varepsilon)$-approximation for all MaxSP/MinSP problems in time $O(m^{k+\ell-1-\delta})$, where $\varepsilon > 0$ can be chosen arbitrarily small. Combining our completeness with (Chen, Williams, SODA 2019), we obtain the perhaps surprising consequence that refuting the OV Hypothesis is *equivalent* to giving an $O(1)$-approximation for all MinSP problems in faster-than-$O(m^{k+\ell-1})$ time.
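
    For orientation, the Maximum Inner Product problem named above asks, given two collections of vectors, for a pair (one from each collection) with maximum inner product. The quadratic brute force below is the kind of baseline that "strongly subquadratic" improvements are measured against; it is only an illustrative sketch with made-up Boolean inputs, not the paper's reductions or the sparse variant shown complete.

        def max_inner_product(A, B):
            """Quadratic brute force for Maximum Inner Product on Boolean vectors.

            Tries all |A|*|B| pairs and returns the best inner product together
            with the achieving pair; a 'strongly subquadratic' algorithm would
            have to beat this by a polynomial factor.
            """
            best = None
            for a in A:
                for b in B:
                    ip = sum(ai * bi for ai, bi in zip(a, b))
                    if best is None or ip > best[0]:
                        best = (ip, a, b)
            return best

        A = [(1, 0, 1, 1), (0, 1, 0, 0)]
        B = [(1, 1, 0, 1), (0, 0, 1, 0)]
        print(max_inner_product(A, B))  # (2, (1, 0, 1, 1), (1, 1, 0, 1))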